An algorithm to minimize within-class scatter and to reduce common matrix dimension for image recognition
Authors
Abstract
In this paper, a new algorithm using 2DPCA and the Gram-Schmidt orthogonalization procedure is proposed for the recognition of face images. The algorithm consists of two parts: in the first, a common feature matrix is obtained; in the second, the dimension of the common feature matrix is reduced. The resulting reduced-dimension common feature matrix is used for face recognition. Column and row covariance matrices are obtained by applying 2DPCA to the column and row vectors of the images, respectively. The algorithm then applies eigenvalue-eigenvector decomposition to each of these two covariance matrices. Total scatter is maximized by projecting the images onto the d eigenvectors corresponding to the d largest eigenvalues of the column covariance matrix, yielding the feature matrix; each column of the feature matrix represents a feature vector. Within-class scatter is minimized by reducing the redundancy among the corresponding feature vectors of different images in the same class. A common feature vector for each of the d eigenvector directions is obtained by applying the Gram-Schmidt orthogonalization procedure, and a common feature matrix is formed by gathering these d common feature vectors into a matrix. The dimension of the common feature matrix is then reduced to d×d by projecting it onto the d eigenvectors corresponding to the d largest eigenvalues of the row covariance matrix. The performance of the proposed algorithm is evaluated experimentally by measuring recognition rates on the AR-Face and ORL face databases; the developed algorithm produces better recognition rates than the Eigenface, Fisherface, and 2DPCA methods.
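The pipeline described in the abstract can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: function names, the toy data, and the exact form of the column/row covariance matrices are assumptions made for clarity, and the class-wise common-vector construction is reduced here to a plain Gram-Schmidt helper.

```python
import numpy as np

def top_eigvecs(cov, d):
    """Eigenvectors of a symmetric matrix for the d largest eigenvalues."""
    vals, vecs = np.linalg.eigh(cov)  # eigh returns ascending eigenvalues
    return vecs[:, np.argsort(vals)[::-1][:d]]

def twodpca_bases(images, d):
    """images: array (n, h, w). Returns (h x d) column and (w x d) row bases."""
    centered = images - images.mean(axis=0)
    # Column covariance (h x h): scatter of the images' column vectors.
    col_cov = sum(a @ a.T for a in centered) / len(images)
    # Row covariance (w x w): scatter of the images' row vectors.
    row_cov = sum(a.T @ a for a in centered) / len(images)
    return top_eigvecs(col_cov, d), top_eigvecs(row_cov, d)

def gram_schmidt(vectors):
    """Orthonormalize vectors, dropping near-linearly-dependent ones."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, b) * b for b in basis)
        norm = np.linalg.norm(w)
        if norm > 1e-10:
            basis.append(w / norm)
    return basis

# Toy usage: six random 8x10 "images", d = 3 (illustrative values).
rng = np.random.default_rng(0)
imgs = rng.standard_normal((6, 8, 10))
U, V = twodpca_bases(imgs, d=3)
feature = U.T @ imgs[0]   # d x w feature matrix; columns are feature vectors
reduced = feature @ V     # projection onto row eigenvectors gives d x d
print(feature.shape, reduced.shape)
```

In the paper's scheme, `gram_schmidt` would be applied per class across the feature vectors of different images in the same eigenvector direction to extract a common feature vector; the sketch only shows the orthogonalization step itself.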
Similar Articles
Improved nonparametric discriminant analysis for classification of hyperspectral images with limited training samples
Feature extraction plays an important role in improving hyperspectral image classification. Compared with parametric methods, nonparametric feature extraction methods perform better when classes are not normally distributed. Moreover, these methods can extract more features than parametric feature extraction methods can. Nonparametric feature extraction methods use nonparametric s...
Video-based face recognition in color space by graph-based discriminant analysis
Video-based face recognition has attracted significant attention over the past decade in many applications such as media technology, network security, human-machine interfaces, and automatic access control systems. The usual approach to face recognition is based on the grayscale image produced by combining the three color component images. In this work, we consider the grayscale image as well as color s...
Separation Between Anomalous Targets and Background Based on the Decomposition of Reduced Dimension Hyperspectral Image
The application of anomaly detection holds a special place among the various processing tasks for hyperspectral images. Nowadays, many methods use only background information to distinguish anomalous pixels from the background. Due to noise and the presence of anomalous pixels in the background, the assumption of a specific statistical distribution for the background, as well as the co...
Iterative Weighted Non-smooth Non-negative Matrix Factorization for Face Recognition
Non-negative Matrix Factorization (NMF) is a part-based image representation method. It stems from the intuitive idea that an entire face image can be constructed by combining several parts. In this paper, we propose a framework for face recognition that finds localized, part-based representations, denoted "Iterative Weighted Non-Smooth Non-negative Matrix Factorization" (IWNS-NMF). A new cost fun...
Feature extraction based on fuzzy 2DLDA
In this paper, the fuzzy Fisherface method is extended to image matrices, yielding the fuzzy 2DLDA (F2DLDA). In the proposed method, we calculate the membership degree matrix by fuzzy K-nearest neighbors (FKNN) and then incorporate the membership degrees into the definitions of the between-class and within-class scatter matrices. Finally, we obtain the fuzzy between-class scatter matrix and fuzzy ...